
    Characteristic varieties of arrangements

    The k-th Fitting ideal of the Alexander invariant B of an arrangement A of n complex hyperplanes defines a characteristic subvariety, V_k(A), of the complex algebraic n-torus. In the combinatorially determined case where B decomposes as a direct sum of local Alexander invariants, we obtain a complete description of V_k(A). For any arrangement A, we show that the tangent cone at the identity of this variety coincides with R^1_k(A), one of the cohomology support loci of the Orlik-Solomon algebra. Using work of Arapura and Libgober, we conclude that all positive-dimensional components of V_k(A) are combinatorially determined, and that R^1_k(A) is the union of a subspace arrangement in C^n, thereby resolving a conjecture of Falk. We use these results to study the reflection arrangements associated to monomial groups. Comment: LaTeX2e, 20 pages. A reference to Libgober's recent work in math.AG/9801070 is added. Several points are clarified, a new example is included.
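    The objects named in this abstract can be sketched as follows; this is a hedged paraphrase, and the precise Fitting-ideal indexing conventions are the paper's, not fixed here:

    ```latex
    % Sketch of the characteristic variety and its tangent cone,
    % in the notation of the abstract above. F_k(B) denotes the
    % k-th Fitting ideal of the Alexander invariant B, and TC_1
    % the tangent cone at the identity of the n-torus.
    V_k(\mathcal{A}) \;=\; V\!\bigl(F_k(B)\bigr) \;\subset\; (\mathbb{C}^{*})^{n},
    \qquad
    TC_1\bigl(V_k(\mathcal{A})\bigr) \;=\; \mathcal{R}^1_k(\mathcal{A}) \;\subset\; \mathbb{C}^{n}.
    ```

    The second equality is the tangent-cone statement the abstract asserts, identifying the characteristic variety's local structure at the identity with the resonance variety of the Orlik-Solomon algebra.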

    Alexander Invariants of Complex Hyperplane Arrangements

    Let A be an arrangement of complex hyperplanes. The fundamental group of the complement of A is determined by a braid monodromy homomorphism from a finitely generated free group to the pure braid group. Using the Gassner representation of the pure braid group, we find an explicit presentation for the Alexander invariant of A. From this presentation, we obtain combinatorial lower bounds for the ranks of the Chen groups of A. We also provide a combinatorial criterion for when these lower bounds are attained. Comment: 26 pages; LaTeX2e with amscd and amssymb packages.

    The boundary manifold of a complex line arrangement

    We study the topology of the boundary manifold of a line arrangement in CP^2, with emphasis on the fundamental group G and associated invariants. We determine the Alexander polynomial Delta(G), and more generally, the twisted Alexander polynomial associated to the abelianization of G and an arbitrary complex representation. We give an explicit description of the unit ball in the Alexander norm, and use it to analyze certain Bieri-Neumann-Strebel invariants of G. From the Alexander polynomial, we also obtain a complete description of the first characteristic variety of G. Comparing this with the corresponding resonance variety of the cohomology ring of G enables us to characterize those arrangements for which the boundary manifold is formal. Comment: This is the version published by Geometry & Topology Monographs on 22 February 200

    Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting

    In lifelong learning systems, especially those based on artificial neural networks, one of the biggest obstacles is the severe inability to retain old knowledge as new information is encountered. This phenomenon is known as catastrophic forgetting. In this article, we propose a new kind of connectionist architecture, the Sequential Neural Coding Network, that is robust to forgetting when learning from streams of data points and, unlike networks of today, does not learn via the immensely popular back-propagation of errors. Grounded in the neurocognitive theory of predictive processing, our model adapts its synapses in a biologically plausible fashion, while another, complementary neural system rapidly learns to direct and control this cortex-like structure by mimicking the task-executive control functionality of the basal ganglia. In our experiments, we demonstrate that our self-organizing system experiences significantly less forgetting than standard neural models and outperforms a wide swath of previously proposed methods, even though it is trained across task datasets in a stream-like fashion. The promising performance of our complementary system on benchmarks, e.g., SplitMNIST, Split Fashion MNIST, and Split NotMNIST, offers evidence that by incorporating mechanisms prominent in real neuronal systems, such as competition, sparse activation patterns, and iterative input processing, a new possibility opens up for tackling the grand challenge of lifelong machine learning. Comment: Key updates including results on standard benchmarks, e.g., split mnist/fmnist/not-mnist. Task selection/basal ganglia model has been integrated.
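    The SplitMNIST-style benchmarks named above all follow the same protocol: a multi-class dataset is partitioned by label into a sequence of small tasks, which the learner sees one after another. A minimal sketch of that task-stream construction is below, on synthetic data; the function name and shape are illustrative assumptions, not the paper's code.

    ```python
    # Hedged sketch of the Split-benchmark protocol: partition a
    # 10-class dataset into 5 sequential binary tasks ({0,1}, {2,3}, ...).
    # `make_split_tasks` is an illustrative name, not from the paper.

    def make_split_tasks(examples, labels, classes_per_task=2):
        """Group (example, label) pairs into sequential tasks by label range."""
        n_classes = max(labels) + 1
        tasks = []
        for start in range(0, n_classes, classes_per_task):
            members = [(x, y) for x, y in zip(examples, labels)
                       if start <= y < start + classes_per_task]
            tasks.append(members)
        return tasks

    # Toy stand-in data: 20 "images" labeled 0..9, two per class.
    xs = list(range(20))
    ys = [i % 10 for i in range((20))]
    tasks = make_split_tasks(xs, ys)
    print(len(tasks))                        # 5 sequential tasks
    print(sorted({y for _, y in tasks[0]}))  # [0, 1]
    ```

    A continual learner is then trained on `tasks[0]`, `tasks[1]`, ... in order, and catastrophic forgetting is measured by re-evaluating accuracy on earlier tasks after later ones have been learned.
    
    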